Search for: All records

Creators/Authors contains: "Doshi-Velez, Finale"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Personalising decision-making assistance to different users and tasks can improve human-AI team performance, for example by appropriately calibrating reliance on AI assistance. However, people differ in many hidden qualities, and adapting AI assistance to these qualities is difficult. In this work, we consider a hidden quality previously identified as important: overreliance on AI assistance. We would like to (i) quickly determine the value of this hidden quality, and (ii) personalise AI assistance based on it. In our first study, we introduce a few probe questions (whose true answers we know) to determine whether a user is an overrelier, finding that correctly chosen probe questions work well. In our second study, we improve human-AI team performance by personalising AI assistance based on users' overreliance. Exploratory analysis indicates that people learn different strategies for using AI assistance depending on what AI assistance they saw previously, suggesting that adaptive AI assistance may need to take this history into account. We hope that future work will continue exploring how to infer and personalise to other important hidden qualities.
    Free, publicly-accessible full text available December 2, 2026
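The probe-question idea in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the function names, the deliberately-wrong-AI probe design, and the 0.5 threshold are all assumptions made here for concreteness.

```python
# Hypothetical sketch of probe-based overreliance detection (not the
# paper's implementation). Assumption: on each probe question the AI
# deliberately recommends a wrong answer; a user who follows the AI
# anyway on such probes exhibits overreliance.

def overreliance_rate(probe_results):
    """probe_results: list of booleans, True if the user followed the
    (incorrect) AI recommendation on that probe question."""
    return sum(probe_results) / len(probe_results)

def is_overrelier(probe_results, threshold=0.5):
    """Label the user an overrelier if they followed wrong AI advice on
    more than `threshold` of the probes (the threshold is an assumption)."""
    return overreliance_rate(probe_results) > threshold

# Example: the user followed the wrong AI answer on 3 of 4 probes.
print(is_overrelier([True, True, True, False]))  # 0.75 > 0.5 -> True
```

A real system would then branch on this label, e.g. withholding AI recommendations (or showing explanations instead) for users flagged as overreliers.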
  2. Free, publicly-accessible full text available August 5, 2026
  3. Free, publicly-accessible full text available April 24, 2026
  4. Free, publicly-accessible full text available April 26, 2026
  5. In settings where an AI agent nudges a human agent toward a goal, the AI can quickly learn a high-quality policy by modeling the human well. Despite behavioral evidence that humans hyperbolically discount future rewards, we model humans as Markov Decision Processes (MDPs) with exponential discounting, because planning is difficult with non-exponential discounts. In this work, we investigate whether the performance benefits of modeling humans as hyperbolic discounters outweigh the computational costs. We focus on AI interventions that change the human's discounting (i.e., decreasing the human's "nearsightedness" to help them toward distant goals). We derive a fixed exponential discount factor that approximates hyperbolic discounting, and prove that this approximation guarantees the AI will never miss a necessary intervention. We also prove that our approximation causes fewer false positives (unnecessary interventions) than the mean hazard rate, another well-known method for approximating hyperbolic MDPs as exponential ones. Surprisingly, our experiments demonstrate that exponential approximations outperform hyperbolic ones in online learning, even when the ground-truth human MDP is hyperbolically discounted.
    Free, publicly-accessible full text available May 9, 2026
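The distinction the abstract above rests on can be made concrete with the standard functional forms. This sketch assumes the textbook hyperbolic discount 1/(1 + kt) and exponential discount gamma^t; it does not reproduce the paper's derived fixed discount factor, only the property that motivates it.

```python
# Illustrative comparison of hyperbolic and exponential discounting
# (standard textbook forms, not the paper's derivation).

def hyperbolic(t, k=1.0):
    """Hyperbolic discount weight for a reward t steps away."""
    return 1.0 / (1.0 + k * t)

def exponential(t, gamma=0.9):
    """Exponential discount weight for a reward t steps away."""
    return gamma ** t

# The one-step discount ratio is constant under exponential discounting
# but increases toward 1 under hyperbolic discounting ("present bias").
# A hyperbolic agent's effective patience thus depends on the horizon,
# so no single gamma matches it exactly -- which is why approximating
# hyperbolic discounting with a fixed exponential factor involves the
# trade-offs the abstract analyses.
hyp_ratios = [hyperbolic(t + 1) / hyperbolic(t) for t in range(4)]
exp_ratios = [exponential(t + 1) / exponential(t) for t in range(4)]
print(hyp_ratios)  # strictly increasing toward 1
print(exp_ratios)  # constant (all approximately 0.9)
```

The increasing ratio is also what makes planning hard: a hyperbolic agent's preferences between two future rewards can reverse as both draw nearer, which exponential discounting can never capture with a single fixed gamma.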
  6. Free, publicly-accessible full text available March 24, 2026
  7. We provide new connections between two distinct federated learning approaches based on (i) ADMM and (ii) Variational Bayes (VB), and propose new variants that combine their complementary strengths. Specifically, we show that the dual variables in ADMM naturally emerge through the "site" parameters used in VB with isotropic Gaussian covariances. Using this, we derive two versions of ADMM from VB that use flexible covariances and functional regularisation, respectively. Through numerical experiments, we validate the resulting performance improvements. This work connects two fields that were believed to be fundamentally different and combines them to improve federated learning.
    Free, publicly-accessible full text available January 22, 2026
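For readers unfamiliar with the ADMM side of the abstract above, here is a minimal consensus-ADMM sketch for a federated setting. This is the standard textbook algorithm on scalar quadratic client losses, not the VB-derived variants the paper proposes; the dual variables `u` are the quantities the abstract relates to VB "site" parameters.

```python
# Minimal consensus ADMM for "federated" averaging of scalar models
# (standard textbook ADMM, not the paper's VB-derived variants).
# Each client i holds f_i(x) = 0.5 * (x - a[i])**2, so the consensus
# minimiser of sum_i f_i is the mean of the a[i].

def consensus_admm(a, rho=1.0, iters=50):
    n = len(a)
    x = [0.0] * n   # local client models
    u = [0.0] * n   # scaled dual variables (one per client)
    z = 0.0         # global (server) model
    for _ in range(iters):
        # Local step on each client: argmin_x f_i(x) + (rho/2)(x - z + u_i)^2,
        # which for the quadratic f_i has the closed form below.
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        # Server aggregation of local models plus duals.
        z = sum(x[i] + u[i] for i in range(n)) / n
        # Dual update: accumulate each client's consensus violation.
        u = [u[i] + x[i] - z for i in range(n)]
    return z

print(round(consensus_admm([1.0, 2.0, 6.0]), 4))  # -> 3.0, the mean of a
```

Only `x_i + u_i` crosses the network each round, which is what makes ADMM attractive for federated learning; the paper's contribution is showing that these same duals arise naturally inside a VB treatment.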
  8. Free, publicly-accessible full text available April 25, 2026
  9. In settings where users both need high accuracy and are time-pressured, such as doctors working in emergency rooms, we want to provide AI assistance that both increases decision accuracy and reduces decision-making time. Current literature focusses on how users interact with AI assistance when there is no time pressure, finding that different AI assistances have different benefits: some reduce the time taken while increasing overreliance on AI, whereas others do the opposite. The precise benefit can depend on both the user and the task. In time-pressured scenarios, adapting when we show AI assistance is especially important: relying on the AI assistance can save time, and is therefore beneficial when the AI is likely to be right. We would ideally adapt what AI assistance we show depending on properties of the task and of the user in order to best trade off accuracy and time. We introduce a study in which users answer a series of logic puzzles. We find that time pressure affects how users use different AI assistances, making some assistances more beneficial than others compared to no-time-pressure settings. We also find that a user's overreliance rate is a key predictor of their behaviour: overreliers and not-overreliers use different AI assistance types differently. We find marginal correlations between a user's overreliance rate (which is related to the user's trust in AI recommendations) and their personality traits (Big Five personality traits). Overall, our work suggests that AI assistances have different accuracy-time tradeoffs under time pressure than without it, and we explore how we might adapt AI assistances in this setting.